11 research outputs found
Automatic analysis of medical images for change detection in prostate cancer
Prostate cancer is the most common cancer and the second most common cause of cancer death in men in the UK. However, the risk it poses to patients varies considerably, and the widespread use of prostate-specific antigen (PSA) screening has led to over-diagnosis and over-treatment of low-grade tumours. It is therefore important to differentiate high-grade prostate cancer from slowly-growing, low-grade cancer. Many men with low-grade cancer are placed on active surveillance (AS), which involves constant monitoring and intervention for risk reclassification, relying increasingly on magnetic resonance imaging (MRI) to detect disease progression, in addition to TRUS-guided biopsies, which remain the routine clinical standard. This creates a need for new tools to process these images. In particular, accurate TRUS-MR registration is important so that corresponding anatomy can be located between the two modalities. Automatic segmentation of the prostate gland on both modalities mitigates some of the challenges of this registration, such as patient motion, tissue deformation, and procedure time. This thesis focuses on deep learning methods, specifically convolutional neural networks (CNNs), for prostate cancer management. Chapters 4 and 5 investigated the use of CNNs for both TRUS and MRI prostate gland segmentation, reporting high segmentation accuracies for both: a Dice Similarity Coefficient (DSC) of 0.89 for TRUS segmentation and DSCs between 0.84 and 0.89 for MRI prostate gland segmentation using a range of networks. Chapter 5 also investigated the impact of these segmentation scores on more clinically relevant measures, such as MRI-TRUS registration errors and volume measures, showing that a statistically significant difference in DSCs did not lead to a statistically significant difference in the clinical measures derived from these segmentations.
The potential of these algorithms in commercial and clinical systems is summarised, and the use of MRI prostate gland segmentation in the application of radiological prostate cancer progression prediction for AS patients is investigated and discussed in Chapter 8, which shows statistically significant improvements in accuracy when using spatial priors in the form of prostate segmentations (0.63 ± 0.16 vs. 0.82 ± 0.18 when comparing whole prostate MRI vs. only the prostate gland region, respectively).
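Segmentation quality throughout these chapters is reported as the Dice Similarity Coefficient (DSC). A minimal sketch of how a DSC is computed from two binary masks (the function name and toy masks are illustrative, not taken from the thesis):

```python
def dice_score(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks (flat lists of 0/1)."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example: two overlapping "segmentations" on a flattened 2x4 grid
pred = [1, 1, 1, 0, 0, 0, 1, 0]
true = [1, 1, 0, 0, 0, 0, 1, 1]
print(dice_score(pred, true))  # 2*3 / (4+4) = 0.75
```

A DSC of 1.0 means perfect overlap, so the reported values of 0.84-0.89 indicate close but not voxel-perfect agreement with manual delineations.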
Adversarial Deformation Regularization for Training Image Registration Neural Networks
We describe an adversarial learning approach to constrain convolutional
neural network training for image registration, replacing heuristic smoothness
measures of displacement fields often used in these tasks. Using
minimally-invasive prostate cancer intervention as an example application, we
demonstrate the feasibility of utilizing biomechanical simulations to
regularize a weakly-supervised anatomical-label-driven registration network for
aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural
transrectal ultrasound (TRUS) images. A discriminator network is optimized to
distinguish the registration-predicted displacement fields from the motion data
simulated by finite element analysis. During training, the registration network
simultaneously aims to maximize similarity between anatomical labels that
drives image alignment and to minimize an adversarial generator loss that
measures divergence between the predicted- and simulated deformation. The
end-to-end trained network enables efficient and fully-automated registration
that only requires an MR and TRUS image pair as input, without anatomical
labels or simulated data during inference. 108 pairs of labelled MR and TRUS
images from 76 prostate cancer patients and 71,500 nonlinear finite-element
simulations from 143 different patients were used for this study. We show that,
with only gland segmentation as training labels, the proposed method can help
predict physically plausible deformation without any other smoothness penalty.
Based on cross-validation experiments using 834 pairs of independent validation
landmarks, the proposed adversarial-regularized registration achieved a target
registration error of 6.3 mm that is significantly lower than those from
several other regularization methods. Comment: Accepted to MICCAI 201
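The training objective described above combines a label-driven similarity term with an adversarial generator loss weighed against it. A minimal sketch of that combination, assuming a soft-Dice-style label loss and a discriminator that scores how plausible a displacement field looks (the function names, toy discriminator, and weighting `alpha` are illustrative assumptions, not the paper's implementation):

```python
import math

def label_similarity_loss(warped_labels, fixed_labels):
    """Soft-Dice-style dissimilarity between warped moving labels and fixed labels."""
    inter = sum(w * f for w, f in zip(warped_labels, fixed_labels))
    total = sum(warped_labels) + sum(fixed_labels)
    return 1.0 - 2.0 * inter / total if total else 0.0

def adversarial_generator_loss(discriminator, predicted_ddf):
    """Generator loss: penalise DDFs the discriminator flags as non-physical.

    `discriminator` returns the probability that a displacement field came
    from a finite-element simulation (i.e. is physically plausible)."""
    p_real = discriminator(predicted_ddf)
    return -math.log(max(p_real, 1e-8))

def registration_training_loss(warped_labels, fixed_labels, predicted_ddf,
                               discriminator, alpha=0.1):
    # Combined objective: align anatomical labels while keeping the
    # predicted deformation close to the simulated, plausible distribution.
    return (label_similarity_loss(warped_labels, fixed_labels)
            + alpha * adversarial_generator_loss(discriminator, predicted_ddf))
```

The discriminator itself is trained in alternation to separate predicted from simulated fields; at inference neither the labels nor the discriminator is needed, matching the paper's fully-automated setup.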
Label-driven weakly-supervised learning for multimodal deformable image registration
Spatially aligning medical images from different modalities remains a
challenging task, especially for intraoperative applications that require fast
and robust algorithms. We propose a weakly-supervised, label-driven formulation
for learning 3D voxel correspondence from higher-level label correspondence,
thereby bypassing classical intensity-based image similarity measures. During
training, a convolutional neural network is optimised by outputting a dense
displacement field (DDF) that warps a set of available anatomical labels from
the moving image to match their corresponding counterparts in the fixed image.
These label pairs, including solid organs, ducts, vessels, point landmarks and
other ad hoc structures, are only required at training time and can be
spatially aligned by minimising a cross-entropy function of the warped moving
label and the fixed label. During inference, the trained network takes a new
image pair to predict an optimal DDF, resulting in a fully-automatic,
label-free, real-time and deformable registration. For interventional
applications where large global transformation prevails, we also propose a
neural network architecture to jointly optimise the global- and local
displacements. Experiment results are presented based on cross-validating
registrations of 111 pairs of T2-weighted magnetic resonance images and 3D
transrectal ultrasound images from prostate cancer patients with a total of
over 4000 anatomical labels, yielding a median target registration error of 4.2
mm on landmark centroids and a median Dice of 0.88 on prostate glands. Comment: Accepted to ISBI 201
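The label-driven loss above relies on warping the moving labels through the predicted dense displacement field (DDF) before comparing them to the fixed labels. A 1D nearest-neighbour illustration of that warping step (a sketch only; real implementations resample 3D label volumes, typically with trilinear interpolation):

```python
def warp_label_1d(moving_label, ddf):
    """Warp a 1D binary label with a dense displacement field (DDF) by
    nearest-neighbour resampling: output[i] = moving_label[round(i + ddf[i])]."""
    n = len(moving_label)
    warped = []
    for i, d in enumerate(ddf):
        src = int(round(i + d))
        src = min(max(src, 0), n - 1)  # clamp to the image domain
        warped.append(moving_label[src])
    return warped

moving = [0, 0, 1, 1, 0]
ddf = [2, 2, 2, 2, 2]   # uniform shift of +2 voxels
print(warp_label_1d(moving, ddf))  # [1, 1, 0, 0, 0]
```

Minimising a cross-entropy (or Dice) loss between such warped labels and the fixed labels is what drives the network toward the correct DDF during training.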
A STUDY OF THE ANTI-INFLAMMATORY EFFECTS OF THE ETHYL ACETATE FRACTION OF THE METHANOL EXTRACT OF FORSYTHIAE FRUCTUS
Background: The dried fruit of Forsythia suspensa (Thunb.) Vahl. (Oleaceae) are better known by their herbal name Forsythiae Fructus, and have a bitter taste, slightly pungent smell, and cold habit. FF has been widely used to treat symptoms associated with the lung, heart, and small intestine. Recently, bioactive compounds isolated from hydrophobic solvent fractions of FF have been reported to have anti-oxidant, anti-bacterial, and anti-cancer effects. Traditionally, almost all herbal medicines are water extracts, and thus, extraction methods should be developed to optimize the practical efficacies of herbal medicines. Materials and Methods: In this study, the anti-inflammatory effects of the ethyl acetate fraction of the methanol extract of FF (FFE) were assessed by measuring NO and PGE2 production byand intracellular ROS and protein levels of iNOS and COX-2in RAW 264.7 cells. Results: FFE inhibited COX-2 expression in LPS-stimulated RAW 264.7 cells. Conclusion: In summary, FFE effectively reduced intracellular ROS and NO levels and inhibited PGE2 production by down- regulating COX-2 levels
Exploring a new paradigm for the fetal anomaly ultrasound scan: Artificial intelligence in real time
Weakly-supervised convolutional neural networks for multimodal image registration
One of the fundamental challenges in supervised learning for multimodal image
registration is the lack of ground-truth for voxel-level spatial
correspondence. This work describes a method to infer voxel-level
transformation from higher-level correspondence information contained in
anatomical labels. We argue that such labels are more reliable and practical to
obtain for reference sets of image pairs than voxel-level correspondence.
Typical anatomical labels of interest may include solid organs, vessels, ducts,
structure boundaries and other subject-specific ad hoc landmarks. The proposed
end-to-end convolutional neural network approach aims to predict displacement
fields to align multiple labelled corresponding structures for individual image
pairs during the training, while only unlabelled image pairs are used as the
network input for inference. We highlight the versatility of the proposed
strategy, for training, utilising diverse types of anatomical labels, which
need not be identifiable over all training image pairs. At inference, the
resulting 3D deformable image registration algorithm runs in real-time and is
fully-automated without requiring any anatomical labels or initialisation.
Several network architecture variants are compared for registering T2-weighted
magnetic resonance images and 3D transrectal ultrasound images from prostate
cancer patients. A median target registration error of 3.6 mm on landmark
centroids and a median Dice of 0.87 on prostate glands are achieved from
cross-validation experiments, in which 108 pairs of multimodal images from 76
patients were tested with high-quality anatomical labels. Comment: Accepted manuscript in Medical Image Analysi
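The target registration error reported here is evaluated on landmark centroids. A minimal sketch of that evaluation for 3D point landmarks (the toy coordinates are illustrative):

```python
import math

def centroid(points):
    """Centroid of a set of 3D landmark points."""
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(3))

def target_registration_error(landmarks_fixed, landmarks_warped):
    """TRE: Euclidean distance between corresponding landmark centroids (mm)."""
    return math.dist(centroid(landmarks_fixed), centroid(landmarks_warped))

fixed = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
warped = [(0.0, 3.0, 0.0), (2.0, 3.0, 0.0)]
print(target_registration_error(fixed, warped))  # 3.0
```

Averaging over held-out landmark pairs in cross-validation gives the kind of median TRE (3.6 mm) quoted in the abstract.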
Morphological change forecasting for prostate glands using feature-based registration and kernel density extrapolation
Organ morphology is a key indicator for prostate disease diagnosis and
prognosis. For instance, in a longitudinal study of prostate cancer patients
under active surveillance, the volume, boundary smoothness and their changes
are closely monitored on time-series MR image data. In this paper, we describe
a new framework for forecasting prostate morphological changes, as the ability
to detect such changes earlier than what is currently possible may enable
timely treatment or avoiding unnecessary confirmatory biopsies. In this work,
an efficient feature-based MR image registration is first developed to align
delineated prostate gland capsules to quantify the morphological changes using
the inferred dense displacement fields (DDFs). We then propose to use kernel
density estimation (KDE) of the probability density of the DDF-represented
"future morphology changes" between current and future time points,
before the future data become available. The KDE utilises a novel distance
function that takes into account morphology, stage-of-progression and
duration-of-change, which are considered factors in such subject-specific
forecasting. We validate the proposed approach on image masks unseen to
registration network training, without using any data acquired at the future
target time points. The experiment results are presented on a longitudinal data
set with 331 images from 73 patients, yielding an average Dice score of 0.865
on a holdout set, between the ground-truth and the image masks warped by the
KDE-predicted DDFs. Comment: Accepted by ISBI 202
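The forecasting step can be pictured as a kernel-weighted average of previously observed DDFs, where the kernel acts on a composite distance over morphology, stage-of-progression and duration-of-change. A heavily simplified sketch with one scalar feature per factor (the function names, features, weights and bandwidth are illustrative assumptions, not the paper's actual distance function):

```python
import math

def composite_distance(a, b, weights=(1.0, 1.0, 1.0)):
    """Toy distance combining morphology, stage-of-progression and
    duration-of-change differences (each reduced to a scalar feature here)."""
    return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))

def kde_forecast_ddf(query, reference_features, reference_ddfs, bandwidth=1.0):
    """Forecast a DDF as the Gaussian-kernel-weighted average of reference DDFs."""
    ws = [math.exp(-(composite_distance(query, f) / bandwidth) ** 2)
          for f in reference_features]
    total = sum(ws)
    n = len(reference_ddfs[0])
    return [sum(w * d[i] for w, d in zip(ws, reference_ddfs)) / total
            for i in range(n)]

feats = [(0.0, 0.0, 0.0), (10.0, 10.0, 10.0)]
ddfs = [[1.0, 1.0], [5.0, 5.0]]
# A query close to the first reference yields a forecast close to its DDF
print(kde_forecast_ddf((0.1, 0.0, 0.0), feats, ddfs))
```

Warping the current prostate mask with the forecast DDF is then compared against the (held-out) future ground-truth mask, which is how the 0.865 Dice on the holdout set is obtained.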